2025-06-08 17:43:29,409 [ 328085 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:42, check_args_and_update_paths)
2025-06-08 17:43:29,409 [ 328085 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:86, check_args_and_update_paths)
2025-06-08 17:43:29,409 [ 328085 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:97, check_args_and_update_paths)
2025-06-08 17:43:29,409 [ 328085 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:99, check_args_and_update_paths)
clickhouse_integration_tests_volume
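For local debugging it can help to replay the runner invocation that follows by hand. The sketch below is a hypothetical, heavily trimmed Python reconstruction: only a few of the volumes and environment variables from the logged command are kept, and the single test selected in PYTEST_ADDOPTS is illustrative, not the full list from the log.

```python
import subprocess

# Hypothetical helper: replays a trimmed version of the `docker run` command
# recorded below. Paths, flags, and the image tag are copied from the log;
# the omitted volumes/env vars would be needed for a fully faithful rerun.
cmd = [
    "docker", "run", "--rm", "--privileged", "--dns-search=.",
    "--volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse",
    "--volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration",
    "--volume=clickhouse_integration_tests_volume:/var/lib/docker",
    "-e", "PYTHONUNBUFFERED=1",
    "-e", "PYTEST_ADDOPTS=-rfEps --run-id=1 --color=no "
          "test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart -vvv",
    "altinityinfra/integration-tests-runner:9d492c2eec24",
]
subprocess.run(cmd, check=True)
```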
Running pytest container as: 'docker run --rm --name clickhouse_integration_tests_jhoknn --privileged --dns-search='.' --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=2cffe1eae894 -e DOCKER_BASE_TAG=2993bc2bf171 -e DOCKER_KERBERIZED_HADOOP_TAG=ce74919e88f5 -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=a2d3dc777d0c -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS=" -rfEps --run-id=1 --color=no --durations=0 test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_filesystem_error 'test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory0-node]' 'test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory1-node]' 'test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory2-node]' test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions test_postgresql_replica_database_engine_1/test.py::test_different_data_types test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries test_postgresql_replica_database_engine_1/test.py::test_multiple_databases test_postgresql_replica_database_engine_1/test.py::test_quoting_1 test_postgresql_replica_database_engine_1/test.py::test_quoting_2 test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index -vvv" altinityinfra/integration-tests-runner:9d492c2eec24 '.
Start tests
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /ClickHouse/tests/integration
configfile: pytest.ini
plugins: order-1.0.1, random-0.2, timeout-2.2.0, repeat-0.9.3, reportlog-0.4.0, xdist-3.5.0
timeout: 900.0s
timeout method: signal
timeout func_only: False
collecting ... collected 17 items

test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_filesystem_error SKIPPED (…lly triggers LOGICAL_ERROR which leads to crash with those builds) [ 5%]
test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory0-node] SKIPPED (…/issues/51152) [ 11%]
test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory1-node] SKIPPED (…/issues/51152) [ 17%]
test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory2-node] SKIPPED (…/issues/51152) [ 23%]
test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication FAILED [ 29%]
test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication FAILED [ 35%]
test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value FAILED [ 41%]
test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart FAILED [ 47%]
test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions FAILED [ 52%]
test_postgresql_replica_database_engine_1/test.py::test_different_data_types FAILED [ 58%]
test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables FAILED [ 64%]
test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables FAILED [ 70%]
test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries FAILED [ 76%]
test_postgresql_replica_database_engine_1/test.py::test_multiple_databases FAILED [ 82%]
test_postgresql_replica_database_engine_1/test.py::test_quoting_1 FAILED [ 88%]
test_postgresql_replica_database_engine_1/test.py::test_quoting_2 FAILED [ 94%]
test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index FAILED [100%]
=================================== FAILURES ===================================
_____________ test_abrupt_connection_loss_while_heavy_replication ______________

started_cluster = 

    def test_abrupt_connection_loss_while_heavy_replication(started_cluster):
        def transaction(thread_id):
            if thread_id % 2:
                conn = get_postgres_conn(
                    ip=started_cluster.postgres_ip,
                    port=started_cluster.postgres_port,
                    database=True,
                    auto_commit=True,
                )
            else:
                conn = get_postgres_conn(
                    ip=started_cluster.postgres_ip,
                    port=started_cluster.postgres_port,
                    database=True,
                    auto_commit=False,
                )
            cursor = conn.cursor()
            for query in queries:
                cursor.execute(query.format(thread_id))
                print("thread {}, query {}".format(thread_id, query))
            if thread_id % 2 == 0:
                conn.commit()
    
        NUM_TABLES = 6
        pg_manager.create_and_fill_postgres_tables(NUM_TABLES, numbers=0)
    
        threads_num = 6
        threads = []
        for i in range(threads_num):
            threads.append(threading.Thread(target=transaction, args=(i,)))
    
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )
    
        for thread in threads:
            time.sleep(random.uniform(0, 0.5))
            thread.start()
    
        for thread in threads:
            thread.join()  # Join here because it takes time for data to reach wal
    
        time.sleep(2)
        started_cluster.pause_container("postgres1")
        # for i in range(NUM_TABLES):
        #     result = instance.query(f"SELECT count() FROM test_database.postgresql_replica_{i}")
        #     print(result)
        # Just debug
        started_cluster.unpause_container("postgres1")
>       check_several_tables_are_synchronized(instance, NUM_TABLES)

test_postgresql_replica_database_engine_1/test.py:752: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/postgres_utility.py:419: in check_several_tables_are_synchronized
    check_tables_are_synchronized(
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
    
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
    
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E   helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E   Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace:
E   
E   0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000001d2e15a3
E   1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000eff24b4
E   2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x0000000007d363fc
E   3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x00000000173a87a6
E   4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x00000000173a162f
E   5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000001739ca9a
E   6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x0000000017862e10
E   7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000001785f7e0
E   8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: std::unique_ptr> std::__function::__policy_invoker> (DB::InterpreterFactory::Arguments const&)>::__call_impl> (DB::InterpreterFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::InterpreterFactory::Arguments const&) @ 0x0000000017864110
E   9. ./build_docker/./src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::shared_ptr&, std::shared_ptr, DB::SelectQueryOptions const&) @ 0x00000000177cfb6b
E   10. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000017dbf13f
E   11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000017dbac38
E   12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x00000000195edc92
E   13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x000000001960c2e8
E   14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001d16e0a3
E   15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000001d16e912
E   16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001d3705c7
E   17. ./build_docker/./base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x000000001d36e890
E   18. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001d36cd4a
E   19. __tsan_thread_start_func @ 0x000000000706b0cf
E   20. ? @ 0x00007fcd7c91dac3
E   21. ? @ 0x00007fcd7c9af850
E   . (UNKNOWN_TABLE)
E   (query: select * from `test_database.postgresql_replica_0` order by key;)

helpers/client.py:239: QueryRuntimeException
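All 14 failures in this module are the same UNKNOWN_TABLE error against the first replicated table. Note the shape of the failing query: it backtick-quotes the whole dotted name, so the analyzer in this build resolves `test_database.postgresql_replica_0` as a single identifier rather than as database.table; whether that quoting, or the table genuinely being absent after the pause/unpause cycle, is the root cause cannot be settled from this log alone. For orientation, here is a minimal sketch of what a polling check like check_tables_are_synchronized in helpers/postgres_utility.py could look like, with database and table quoted separately; all names and the comparison format are illustrative, not the real helper:

```python
import time

def check_table_is_synchronized(instance, pg_cursor, table,
                                database="test_database", order_by="key",
                                timeout=30):
    # Fetch the expected rows from the PostgreSQL source table.
    pg_cursor.execute(f"select * from {table} order by {order_by};")
    expected = pg_cursor.fetchall()

    # Poll the MaterializedPostgreSQL mirror until it matches or we time out.
    # Quoting database and table separately avoids the single-identifier form
    # `test_database.postgresql_replica_0` rejected in the trace above.
    query = f"select * from `{database}`.`{table}` order by {order_by};"
    deadline = time.monotonic() + timeout
    while True:
        rows = instance.query(query)  # ClickHouseInstance.query (helpers/cluster.py)
        got = [tuple(int(v) for v in line.split("\t"))
               for line in rows.strip().splitlines()]
        if got == expected:
            return
        if time.monotonic() > deadline:
            raise AssertionError(f"{database}.{table} is not synchronized")
        time.sleep(1)
```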
---------------------------- Captured stdout setup -----------------------------
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
PostgreSQL is available - running test
------------------------------ Captured log setup ------------------------------
2025-06-08 17:44:56 [ 496 ] INFO : Running tests in /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/test.py (cluster.py:2659, start)
2025-06-08 17:44:56 [ 496 ] DEBUG : Cluster start called. is_up=False (cluster.py:2666, start)
2025-06-08 17:44:56 [ 496 ] DEBUG : Docker networks for project roottestpostgresqlreplicadatabaseengine1 are NETWORK ID NAME DRIVER SCOPE (cluster.py:780, print_all_docker_pieces)
2025-06-08 17:44:56 [ 496 ] DEBUG : Docker containers for project roottestpostgresqlreplicadatabaseengine1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:788, print_all_docker_pieces)
2025-06-08 17:44:56 [ 496 ] DEBUG : Docker volumes for project roottestpostgresqlreplicadatabaseengine1 are DRIVER VOLUME NAME (cluster.py:796, print_all_docker_pieces)
2025-06-08 17:44:56 [ 496 ] DEBUG : Cleanup called (cluster.py:801, cleanup)
2025-06-08 17:44:56 [ 496 ] DEBUG : Docker networks for project roottestpostgresqlreplicadatabaseengine1 are NETWORK ID NAME DRIVER SCOPE (cluster.py:780, print_all_docker_pieces)
2025-06-08 17:44:56 [ 496 ] DEBUG : Docker containers for project roottestpostgresqlreplicadatabaseengine1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:788, print_all_docker_pieces)
2025-06-08 17:44:56 [ 496 ] DEBUG : Docker volumes for project roottestpostgresqlreplicadatabaseengine1 are DRIVER VOLUME NAME (cluster.py:796, print_all_docker_pieces)
2025-06-08 17:44:56 [ 496 ] DEBUG : Command:docker container list --all --filter name='^/roottestpostgresqlreplicadatabaseengine1_.*_1$' --format '{{.ID}}:{{.Names}}' (cluster.py:113, run_and_check)
2025-06-08 17:44:56 [ 496 ] DEBUG : Unstopped containers: {} (cluster.py:815, cleanup)
2025-06-08 17:44:56 [ 496 ] DEBUG : No running containers for project: roottestpostgresqlreplicadatabaseengine1 (cluster.py:829, cleanup)
2025-06-08 17:44:56 [ 496 ] DEBUG : Trying to prune unused networks... (cluster.py:835, cleanup)
2025-06-08 17:44:56 [ 496 ] DEBUG : Trying to prune unused images... (cluster.py:851, cleanup)
2025-06-08 17:44:56 [ 496 ] DEBUG : Command:['docker', 'image', 'prune', '-f'] (cluster.py:113, run_and_check)
2025-06-08 17:44:56 [ 496 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:121, run_and_check)
2025-06-08 17:44:56 [ 496 ] DEBUG : Images pruned (cluster.py:854, cleanup)
2025-06-08 17:44:56 [ 496 ] DEBUG : Trying to prune unused volumes... (cluster.py:860, cleanup)
2025-06-08 17:44:56 [ 496 ] DEBUG : Command:['docker volume ls | wc -l'] (cluster.py:113, run_and_check)
2025-06-08 17:44:56 [ 496 ] DEBUG : Stdout:1 (cluster.py:121, run_and_check)
2025-06-08 17:44:56 [ 496 ] DEBUG : Setup directory for instance: instance (cluster.py:2679, start)
2025-06-08 17:44:56 [ 496 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4383, create_dir)
2025-06-08 17:44:56 [ 496 ] DEBUG : Create directory for common tests configuration (cluster.py:4388, create_dir)
2025-06-08 17:44:56 [ 496 ] DEBUG : Copy common configuration from helpers (cluster.py:4408, create_dir)
2025-06-08 17:44:56 [ 496 ] DEBUG : Generate and write macros file (cluster.py:4441, create_dir)
2025-06-08 17:44:56 [ 496 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/configs/log_conf.xml'] to /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/instance/configs/config.d (cluster.py:4471, create_dir)
2025-06-08 17:44:56 [ 496 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/instance/database (cluster.py:4488, create_dir)
2025-06-08 17:44:56 [ 496 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/instance/logs (cluster.py:4499, create_dir)
2025-06-08 17:44:56 [ 496 ] DEBUG : Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon; coproc tail -f /dev/null; wait $$!" (cluster.py:4582, create_dir)
2025-06-08 17:44:56 [ 496 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'POSTGRES_PORT': '5432', 'POSTGRES_DIR': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/postgres/postgres1', 'POSTGRES_LOGS_FS': 'bind'} stored in /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/.env (cluster.py:86, _create_env_file)
2025-06-08 17:44:56 [ 496 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file)
2025-06-08 17:44:56 [ 496 ] DEBUG : No config file found (config.py:28, find_config_file)
2025-06-08 17:44:56 [ 496 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file)
2025-06-08 17:44:56 [ 496 ] DEBUG : No config file found (config.py:28, find_config_file)
2025-06-08 17:44:56 [ 496 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 (connectionpool.py:546, _make_request)
2025-06-08 17:44:56 [ 496 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'pull'] (cluster.py:113, run_and_check)
2025-06-08 17:45:07 [ 496 ] DEBUG : Stderr:Pulling instance ... (cluster.py:123, run_and_check)
2025-06-08 17:45:07 [ 496 ] DEBUG : Stderr:Pulling postgres1 ... (cluster.py:123, run_and_check)
2025-06-08 17:45:07 [ 496 ] DEBUG : Stderr:Pulling postgres1 ... pulling from library/postgres (cluster.py:123, run_and_check)
2025-06-08 17:45:07 [ 496 ] DEBUG : Stderr:Pulling postgres1 ... digest: sha256:6efd0df010dc3cb40d... (cluster.py:123, run_and_check)
2025-06-08 17:45:07 [ 496 ] DEBUG : Stderr:Pulling postgres1 ... status: image is up to date for p... (cluster.py:123, run_and_check)
2025-06-08 17:45:07 [ 496 ] DEBUG : Stderr:Pulling postgres1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:45:07 [ 496 ] DEBUG : Stderr:Pulling instance ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check)
2025-06-08 17:45:07 [ 496 ] DEBUG : Stderr:Pulling instance ... digest: sha256:8a2c68e2d63d82c826... (cluster.py:123, run_and_check)
2025-06-08 17:45:07 [ 496 ] DEBUG : Stderr:Pulling instance ... status: image is up to date for a... (cluster.py:123, run_and_check)
2025-06-08 17:45:07 [ 496 ] DEBUG : Stderr:Pulling instance ... done (cluster.py:123, run_and_check)
2025-06-08 17:45:07 [ 496 ] DEBUG : Setup Postgres (cluster.py:2791, start)
2025-06-08 17:45:07 [ 496 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', '--verbose', 'up', '-d'] (cluster.py:113, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.config.config.find: Using configuration files: /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.docker_client.get_client: docker-compose version 1.29.2, build unknown (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:docker-py version: (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:CPython version: 3.10.12 (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:OpenSSL version: OpenSSL 3.0.2 15 Mar 2022 (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.docker_client.get_client: Docker base_url: http+docker://localhost (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.docker_client.get_client: Docker version: Platform={'Name': 'Docker Engine - Community'}, Components=[{'Name': 'Engine', 'Version': '23.0.6', 'Details': {'ApiVersion': '1.42', 'Arch': 'amd64', 'BuildTime': '2023-05-05T21:18:13.000000000+00:00', 'Experimental': 'false', 'GitCommit': '9dbdbd4', 'GoVersion': 'go1.19.9', 'KernelVersion': '5.15.0-130-generic', 'MinAPIVersion': '1.12', 'Os': 'linux'}}, {'Name': 'containerd', 'Version': '1.7.18', 'Details': {'GitCommit': 'ae71819c4f5e67bb4d5ae76a6b735f29cc25774e'}}, {'Name': 'runc', 'Version': '1.7.18', 'Details': {'GitCommit': 'v1.1.13-0-g58aa920'}}, {'Name': 'docker-init', 'Version': '0.19.0', 'Details': {'GitCommit': 'de40ad0'}}], Version=23.0.6, ApiVersion=1.42, MinAPIVersion=1.12, GitCommit=9dbdbd4, GoVersion=go1.19.9, Os=linux, Arch=amd64, KernelVersion=5.15.0-130-generic, BuildTime=2023-05-05T21:18:13.000000000+00:00 (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('roottestpostgresqlreplicadatabaseengine1_default') (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker info <- () (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker info -> {'Architecture': 'x86_64', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'BridgeNfIp6tables': True, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'BridgeNfIptables': True, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'CPUSet': True, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'CPUShares': True, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'CgroupDriver': 'cgroupfs', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'CgroupVersion': '2', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'ContainerdCommit': {'Expected': 'ae71819c4f5e67bb4d5ae76a6b735f29cc25774e', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'ID': 'ae71819c4f5e67bb4d5ae76a6b735f29cc25774e'}, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Containers': 0, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('roottestpostgresqlreplicadatabaseengine1_default') (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.network.ensure: Creating network "roottestpostgresqlreplicadatabaseengine1_default" with the default driver (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_network <- (name='roottestpostgresqlreplicadatabaseengine1_default', driver=None, options=None, ipam=None, internal=False, enable_ipv6=False, labels={'com.docker.compose.project': 'roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.network': 'default', 'com.docker.compose.version': '1.29.2'}, attachable=True, check_duplicate=True) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_network -> {'Id': '1b17dc43bec2e871419c5a45edef5b23adb07c972be0c15cb41b4733bc69eb13', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Warning': ''} (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('postgres') (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Cmd': ['postgres'], (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Entrypoint': ['docker-entrypoint.sh'], (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Env': ['PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/17/bin', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {} (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:Creating roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {ServiceName(project='roottestpostgresqlreplicadatabaseengine1', service='postgres1', number=1)} (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for ServiceName(project='roottestpostgresqlreplicadatabaseengine1', service='postgres1', number=1) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('postgres') (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Cmd': ['postgres'], (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Entrypoint': ['docker-entrypoint.sh'], (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Env': ['PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/17/bin', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('postgres') (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Cmd': ['postgres'], (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Entrypoint': ['docker-entrypoint.sh'], (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Env': ['PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/17/bin', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.service.build_container_labels: Added config hash: 9feb8ae4c4f839ef6d94321052b7e6360a3c957f2125293b3a72d12ae3978768 (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (links=[], port_bindings={}, binds=[], volumes_from=[], privileged=False, network_mode='roottestpostgresqlreplicadatabaseengine1_default', devices=None, device_requests=None, dns=None, dns_opt=None, dns_search=None, restart_policy={'Name': 'always', 'MaximumRetryCount': 0}, runtime=None, cap_add=None, cap_drop=None, mem_limit=None, mem_reservation=None, memswap_limit=None, ulimits=None, log_config={'Type': '', 'Config': {}}, extra_hosts=None, read_only=None, pid_mode=None, security_opt=None, ipc_mode=None, cgroup_parent=None, cpu_quota=None, shm_size=None, sysctls=None, pids_limit=None, tmpfs=None, oom_kill_disable=None, oom_score_adj=None, mem_swappiness=None, group_add=None, userns_mode=None, init=None, init_path=None, isolation=None, cpu_count=None, cpu_percent=None, nano_cpus=None, volume_driver=None, cpuset_cpus=None, cpu_shares=None, storage_opt=None, blkio_weight=None, blkio_weight_device=None, device_read_bps=None, device_read_iops=None, device_write_bps=None, device_write_iops=None, mounts=[{'Target': '/postgres/', 'Source': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/postgres/postgres1', 'Type': 'bind', 'ReadOnly': None}], device_cgroup_rules=None, cpu_period=None, cpu_rt_period=None, cpu_rt_runtime=None) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [], (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Links': [], (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'LogConfig': {'Config': {}, 'Type': ''}, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Mounts': [{'ReadOnly': None, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Source': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/postgres/postgres1', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Target': '/postgres/', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Type': 'bind'}], (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'NetworkMode': 'roottestpostgresqlreplicadatabaseengine1_default', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'PortBindings': {}, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'RestartPolicy': {'MaximumRetryCount': 0, 'Name': 'always'}, (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container <- (command=['postgres', '-c', 'wal_level=logical', '-c', 'max_replication_slots=4', '-c', 'logging_collector=on', '-c', 'log_directory=/postgres/logs', '-c', 'log_filename=postgresql.log', '-c', 'log_statement=all', '-c', 'max_connections=200'], environment=['POSTGRES_HOST_AUTH_METHOD=trust', 'POSTGRES_PASSWORD=mysecretpassword', 'PGDATA=/postgres/data'], healthcheck={'test': ['CMD-SHELL', 'pg_isready -U postgres'], 'interval': 10000000000, 'timeout': 5000000000, 'retries': 5}, image='postgres', volumes={}, name='roottestpostgresqlreplicadatabaseengine1_postgres1_1', detach=True, ports=['5432'], labels={'com.docker.compose.project': 'roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service': 'postgres1', 'com.docker.compose.oneoff': 'False', 'com.docker.compose.project.working_dir': '/ClickHouse/tests/integration/compose', 'com.docker.compose.project.config_files': '/ClickHouse/tests/integration/compose/docker_compose_postgres.yml', 'com.docker.compose.project.environment_file': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/.env', 'com.docker.compose.container-number': '1', 'com.docker.compose.version': '1.29.2', 'com.docker.compose.config-hash': '9feb8ae4c4f839ef6d94321052b7e6360a3c957f2125293b3a72d12ae3978768'}, host_config={'NetworkMode': 'roottestpostgresqlreplicadatabaseengine1_default', 'RestartPolicy': {'Name': 'always', 'MaximumRetryCount': 0}, 'VolumesFrom': [], 'Binds': [], 'PortBindings': {}, 'Links': [], 'LogConfig': {'Type': '', 'Config': {}}, 'Mounts': [{'Target': '/postgres/', 'Source': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/postgres/postgres1', 'Type': 'bind', 'ReadOnly': None}]}, networking_config={'EndpointsConfig': {'roottestpostgresqlreplicadatabaseengine1_default': {'Aliases': ['postgres1', 'postgre-sql.local'], 'IPAMConfig': {}}}}) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': '690a1ef820d79df409268beab234d58240138bc50171286f67201786bc8843f7', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Warnings': []} (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('690a1ef820d79df409268beab234d58240138bc50171286f67201786bc8843f7') (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'Args': ['postgres', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: '-c', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'wal_level=logical', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: '-c', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'max_replication_slots=4', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: '-c', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'logging_collector=on', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: '-c', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr: 'log_directory=/postgres/logs', (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('690a1ef820d79df409268beab234d58240138bc50171286f67201786bc8843f7', 'roottestpostgresqlreplicadatabaseengine1_default') (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('690a1ef820d79df409268beab234d58240138bc50171286f67201786bc8843f7', 'roottestpostgresqlreplicadatabaseengine1_default', aliases=['postgres1', 'postgre-sql.local', '690a1ef820d7'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start <- ('690a1ef820d79df409268beab234d58240138bc50171286f67201786bc8843f7') (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start -> None (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='roottestpostgresqlreplicadatabaseengine1', service='postgres1', number=1) (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:Creating roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2025-06-08 17:45:08 [ 496 ] DEBUG : get_instance_ip instance_name=postgres1 (cluster.py:2008, get_instance_ip)
2025-06-08 17:45:08 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestpostgresqlreplicadatabaseengine1_postgres1_1/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:08 [ 496 ] DEBUG : Can't connect to Postgres connection to server at "172.16.3.2", port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? (cluster.py:2251, wait_postgres_to_start)
2025-06-08 17:45:08 [ 496 ] DEBUG : Can't connect to Postgres connection to server at "172.16.3.2", port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? (cluster.py:2251, wait_postgres_to_start)
2025-06-08 17:45:09 [ 496 ] DEBUG : Postgres Started (cluster.py:2248, wait_postgres_to_start)
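At this point Postgres accepts connections. The container was started with wal_level=logical, max_replication_slots=4 and max_connections=200 (see the create_container arguments above), which MaterializedPostgreSQL relies on. A hedged psycopg2 sketch to confirm those settings on the running server, using the host and credentials visible in this log:

```python
import psycopg2

# Confirm the logical-replication settings passed on the container command
# line above. Host/credentials are the ones the harness logs; adjust as needed.
conn = psycopg2.connect(host="172.16.3.2", port=5432,
                        user="postgres", password="mysecretpassword")
conn.autocommit = True
cur = conn.cursor()
for setting in ("wal_level", "max_replication_slots", "max_connections"):
    cur.execute(f"SHOW {setting}")
    print(setting, "=", cur.fetchone()[0])  # expect: logical, 4, 200
```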
2025-06-08 17:45:09 [ 496 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker-compose --env-file /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/.env --project-name roottestpostgresqlreplicadatabaseengine1 --file /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/instance/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml up -d --no-recreate') (cluster.py:3002, start)
2025-06-08 17:45:09 [ 496 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'up', '-d', '--no-recreate'] (cluster.py:113, run_and_check)
2025-06-08 17:45:09 [ 496 ] DEBUG : Stderr:Creating roottestpostgresqlreplicadatabaseengine1_instance_1 ... (cluster.py:123, run_and_check)
2025-06-08 17:45:09 [ 496 ] DEBUG : Stderr:Creating roottestpostgresqlreplicadatabaseengine1_instance_1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:45:09 [ 496 ] DEBUG : ClickHouse instance created (cluster.py:3010, start)
2025-06-08 17:45:09 [ 496 ] DEBUG : get_instance_ip instance_name=instance (cluster.py:2008, get_instance_ip)
2025-06-08 17:45:09 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestpostgresqlreplicadatabaseengine1_instance_1/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:09 [ 496 ] DEBUG : Waiting for ClickHouse start in instance, ip: 172.16.3.3... (cluster.py:3017, start)
2025-06-08 17:45:09 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestpostgresqlreplicadatabaseengine1_instance_1/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:09 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:09 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:10 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:10 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:10 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:10 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:10 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:10 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:10 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:10 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:10 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:11 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:11 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:11 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:11 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:11 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:11 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:11 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:11 [ 496 ] DEBUG : http://localhost:None "GET /v1.42/containers/a3bfa449559ed2471c4ff686916c5b6ddbfaa166379df208018334c8782c0f28/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:45:11 [ 496 ] DEBUG : ClickHouse instance started (cluster.py:3021, start)
2025-06-08 17:45:11 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:45:12 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
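The failing test's transaction() helper (quoted in the traceback above) opens one connection per thread through get_postgres_conn, with auto_commit=True for odd thread ids and a single final commit for even ones, so pausing postgres1 mid-run can strand uncommitted work on the even threads. A sketch of what that helper plausibly does follows; the real one lives in helpers/postgres_utility.py, the signature is inferred from the call sites, and the dbname default is an assumption:

```python
import psycopg2

def get_postgres_conn(ip, port, database=False, auto_commit=True,
                      database_name="postgres_database"):
    # Inferred sketch of the helper used by transaction() in the traceback;
    # the password matches the CREATE DATABASE query logged just above.
    dsn = f"host={ip} port={port} user=postgres password=mysecretpassword"
    if database:
        dsn += f" dbname={database_name}"
    conn = psycopg2.connect(dsn)
    conn.autocommit = auto_commit  # per-statement commits for odd thread ids
    return conn
```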
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_5" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
thread 0, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;
thread 0, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0;
thread 0, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0
thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i);
thread 0, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0;
thread 0, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1;
thread 0, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1
thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0;
thread 0, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0;
thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i);
thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0;
thread 0, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0;
thread 0, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1
thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i);
thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2;
thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
thread 2, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;
thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
thread 3, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;
thread 1, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;
thread 2, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0;
thread 1, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0;
thread 3, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0;
thread 2, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0
thread 1, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0
thread 3, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0
thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i);
thread 2, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0;
thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i);
thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i);
thread 1, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0;
thread 3, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0;
thread 0, query UPDATE postgresql_replica_{} SET key=key+10000000
thread 1, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1;
thread 3, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1;
thread 2, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1;
thread 1, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1
thread 3, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1
thread 2, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1
thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0;
thread 1, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0;
thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0;
thread 3, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0;
thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0;
thread 2, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0;
thread 0, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1;
thread 0, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0;
thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i);
thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i);
thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0;
thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
thread 3, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0;
thread 4, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;
thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0;
thread 2, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0;
thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i);
thread 4, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0;
thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0;
query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 3, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 1, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 4, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 2, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 1, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 4, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 4, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 4, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 5, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 5, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 4, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 2, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 5, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 3, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 1, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 5, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 2, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 2, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 3, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 5, query INSERT INTO postgresql_replica_{} select i, i from 
generate_series(200000, 250000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 4, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 5, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 4, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 4, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 5, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 5, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 5, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; Checking table postgresql_replica_0 exists in test_database Checking table is synchronized: test_database.postgresql_replica_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:45:12 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:12 [ 496 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:45:12 [ 496 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:45:17 [ 496 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'pause', 'postgres1'] (cluster.py:113, run_and_check) 2025-06-08 17:45:17 [ 496 ] DEBUG : Stderr:Pausing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... (cluster.py:123, run_and_check) 2025-06-08 17:45:17 [ 496 ] DEBUG : Stderr:Pausing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... done (cluster.py:123, run_and_check) 2025-06-08 17:45:17 [ 496 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'unpause', 'postgres1'] (cluster.py:113, run_and_check) 2025-06-08 17:45:17 [ 496 ] DEBUG : Stderr:Unpausing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... (cluster.py:123, run_and_check) 2025-06-08 17:45:17 [ 496 ] DEBUG : Stderr:Unpausing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... 
done (cluster.py:123, run_and_check)
2025-06-08 17:45:17 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:45:18 [ 496 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:45:18 [ 496 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:45:19 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:45:20 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:45:20 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:45:20 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
______________ test_abrupt_server_restart_while_heavy_replication ______________

started_cluster = 

    def test_abrupt_server_restart_while_heavy_replication(started_cluster):
        def transaction(thread_id):
            if thread_id % 2:
                conn = get_postgres_conn(
                    ip=started_cluster.postgres_ip,
                    port=started_cluster.postgres_port,
                    database=True,
                    auto_commit=True,
                )
            else:
                conn = get_postgres_conn(
                    ip=started_cluster.postgres_ip,
                    port=started_cluster.postgres_port,
                    database=True,
                    auto_commit=False,
                )
            cursor = conn.cursor()
            for query in queries:
                cursor.execute(query.format(thread_id))
                print("thread {}, query {}".format(thread_id, query))
            if thread_id % 2 == 0:
                conn.commit()

        NUM_TABLES = 6
        pg_manager.create_and_fill_postgres_tables(tables_num=NUM_TABLES, numbers=0)

        threads = []
        threads_num = 6
        for i in range(threads_num):
            threads.append(threading.Thread(target=transaction, args=(i,)))

        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )

        for thread in threads:
            time.sleep(random.uniform(0, 0.5))
            thread.start()

        for thread in threads:
            thread.join()  # Join here because it takes time for data to reach wal

        instance.restart_clickhouse()
>       check_several_tables_are_synchronized(instance, NUM_TABLES)

test_postgresql_replica_database_engine_1/test.py:820: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/postgres_utility.py:419: in check_several_tables_are_synchronized
    check_tables_are_synchronized(
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E           Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace:
E           
E           0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000001d2e15a3
E           1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000eff24b4
E           2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x0000000007d363fc
E           3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x00000000173a87a6
E           4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x00000000173a162f
E           5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000001739ca9a
E           6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x0000000017862e10
E           7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000001785f7e0
E           8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: std::unique_ptr> std::__function::__policy_invoker> (DB::InterpreterFactory::Arguments const&)>::__call_impl> (DB::InterpreterFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::InterpreterFactory::Arguments const&) @ 0x0000000017864110
E           9. ./build_docker/./src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::shared_ptr&, std::shared_ptr, DB::SelectQueryOptions const&) @ 0x00000000177cfb6b
E           10. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000017dbf13f
E           11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000017dbac38
E           12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x00000000195edc92
E           13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x000000001960c2e8
E           14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001d16e0a3
E           15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000001d16e912
E           16.
./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001d3705c7 E 17. ./build_docker/./base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x000000001d36e890 E 18. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001d36cd4a E 19. __tsan_thread_start_func @ 0x000000000706b0cf E 20. ? @ 0x00007f58a6845ac3 E 21. ? @ 0x00007f58a68d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_5" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 0, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 0, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 1, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 0, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 1, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 1, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0thread 0, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 1, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 0, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 1, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 0, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE 
key % 3 = 0; thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 0, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 1, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 2, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 4, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 3, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 4, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 3, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 4, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 3, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 2, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 5, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 3, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 5, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 1, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 4, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 0, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 5, query UPDATE postgresql_replica_{} SET value = value + 101 
WHERE key % 2 = 1; thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 3, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 5, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 1, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 2, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 0, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 0, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 4, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 3, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 5, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 3, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 2, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 4, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 5, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 2, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 2, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 3, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 4, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key 
% 3 = 1; thread 4, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 5, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 3, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; Checking table postgresql_replica_0 exists in test_database Checking table is synchronized: test_database.postgresql_replica_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:45:21 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:21 [ 496 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:45:21 [ 496 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:45:23 [ 496 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2046, exec_in_container) 2025-06-08 17:45:23 [ 496 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', 'ps -C clickhouse'] (cluster.py:113, run_and_check) 2025-06-08 17:45:23 [ 496 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:121, run_and_check) 2025-06-08 17:45:23 [ 496 ] DEBUG : Stdout: 8 ? 00:00:10 clickhouse (cluster.py:121, run_and_check) 2025-06-08 17:45:23 [ 496 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] (cluster.py:2046, exec_in_container) 2025-06-08 17:45:23 [ 496 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', 'pkill clickhouse'] (cluster.py:113, run_and_check) 2025-06-08 17:45:23 [ 496 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container) 2025-06-08 17:45:23 [ 496 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2025-06-08 17:45:24 [ 496 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2025-06-08 17:45:25 [ 496 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container) 2025-06-08 17:45:25 [ 496 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2025-06-08 17:45:25 [ 496 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2025-06-08 17:45:26 [ 496 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | 
grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container) 2025-06-08 17:45:26 [ 496 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2025-06-08 17:45:26 [ 496 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check) 2025-06-08 17:45:27 [ 496 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container) 2025-06-08 17:45:27 [ 496 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2025-06-08 17:45:27 [ 496 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container) 2025-06-08 17:45:27 [ 496 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2025-06-08 17:45:27 [ 496 ] DEBUG : No clickhouse process running. Start new one. (cluster.py:3817, start_clickhouse) 2025-06-08 17:45:27 [ 496 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:2046, exec_in_container) 2025-06-08 17:45:27 [ 496 ] DEBUG : Command:['docker', 'exec', '-u', '0', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:113, run_and_check) 2025-06-08 17:45:28 [ 496 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container) 2025-06-08 17:45:28 [ 496 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2025-06-08 17:45:28 [ 496 ] DEBUG : Stdout:765 (cluster.py:121, run_and_check) 2025-06-08 17:45:28 [ 496 ] DEBUG : Clickhouse process running. 
(cluster.py:3828, start_clickhouse) 2025-06-08 17:45:28 [ 496 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container) 2025-06-08 17:45:28 [ 496 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check) 2025-06-08 17:45:28 [ 496 ] DEBUG : Stdout:765 (cluster.py:121, run_and_check) 2025-06-08 17:45:28 [ 496 ] DEBUG : Executing query select 20 on instance (cluster.py:3455, query) 2025-06-08 17:45:29 [ 496 ] DEBUG : Executing query select 20 on instance (cluster.py:3455, query) 2025-06-08 17:45:29 [ 496 ] DEBUG : Executing query select 20 on instance (cluster.py:3455, query) 2025-06-08 17:45:30 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:30 [ 496 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:45:30 [ 496 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:45:31 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:45:31 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:45:31 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:45:32 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) _____________________ test_changing_replica_identity_value _____________________ started_cluster = def test_changing_replica_identity_value(started_cluster): pg_manager.create_postgres_table("postgresql_replica") instance.query( "INSERT INTO postgres_database.postgresql_replica SELECT 50 + number, number from numbers(50)" ) pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port ) instance.query( "INSERT INTO postgres_database.postgresql_replica SELECT 100 + number, number from numbers(50)" ) > check_tables_are_synchronized(instance, "postgresql_replica") test_postgresql_replica_database_engine_1/test.py:292: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:392: in check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not 
self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica' in scope SELECT * FROM `test_database.postgresql_replica` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000001d2e15a3 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000eff24b4 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x0000000007d363fc E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x00000000173a87a6 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x00000000173a162f E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000001739ca9a E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x0000000017862e10 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000001785f7e0 E 8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: std::unique_ptr> std::__function::__policy_invoker> (DB::InterpreterFactory::Arguments const&)>::__call_impl> (DB::InterpreterFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::InterpreterFactory::Arguments const&) @ 0x0000000017864110 E 9. ./build_docker/./src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::shared_ptr&, std::shared_ptr, DB::SelectQueryOptions const&) @ 0x00000000177cfb6b E 10. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000017dbf13f E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000017dbac38 E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x00000000195edc92 E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x000000001960c2e8 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001d16e0a3 E 15. 
./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000001d16e912
E           16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001d3705c7
E           17. ./build_docker/./base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x000000001d36e890
E           18. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001d36cd4a
E           19. __tsan_thread_start_func @ 0x000000000706b0cf
E           20. ? @ 0x00007f58a6845ac3
E           21. ? @ 0x00007f58a68d7850
E           . (UNKNOWN_TABLE)
E           (query: select * from `test_database.postgresql_replica` order by key;)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Checking table postgresql_replica exists in test_database
Checking table is synchronized: test_database.postgresql_replica
------------------------------ Captured log call -------------------------------
2025-06-08 17:45:32 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:45:32 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:45:32 [ 496 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
2025-06-08 17:45:32 [ 496 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:45:32 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica SELECT 100 + number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:45:33 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:45:33 [ 496 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:45:33 [ 496 ] DEBUG : Executing query select * from `test_database.postgresql_replica` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:45:33 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:45:33 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:45:34 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:45:34 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
___________________________ test_clickhouse_restart ____________________________

started_cluster = 

    def test_clickhouse_restart(started_cluster):
        NUM_TABLES = 5
        pg_manager.create_and_fill_postgres_tables(NUM_TABLES)
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )
>       check_several_tables_are_synchronized(instance, NUM_TABLES)

test_postgresql_replica_database_engine_1/test.py:303: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/postgres_utility.py:419: in check_several_tables_are_synchronized
    check_tables_are_synchronized(
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E           Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace:
E           
E           0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000001d2e15a3
E           1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000eff24b4
E           2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x0000000007d363fc
E           3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x00000000173a87a6
E           4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x00000000173a162f
E           5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000001739ca9a
E           6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x0000000017862e10
E           7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000001785f7e0
E           8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: std::unique_ptr> std::__function::__policy_invoker> (DB::InterpreterFactory::Arguments const&)>::__call_impl> (DB::InterpreterFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::InterpreterFactory::Arguments const&) @ 0x0000000017864110
E           9.
./build_docker/./src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::shared_ptr&, std::shared_ptr, DB::SelectQueryOptions const&) @ 0x00000000177cfb6b E 10. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000017dbf13f E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000017dbac38 E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x00000000195edc92 E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x000000001960c2e8 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001d16e0a3 E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000001d16e912 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001d3705c7 E 17. ./build_docker/./base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x000000001d36e890 E 18. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001d36cd4a E 19. __tsan_thread_start_func @ 0x000000000706b0cf E 20. ? @ 0x00007f58a6845ac3 E 21. ? @ 0x00007f58a68d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Checking table postgresql_replica_0 exists in test_database Checking table is synchronized: test_database.postgresql_replica_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:45:34 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_0` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:34 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_1` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:34 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_2` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:35 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_3` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:35 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_4` SELECT number, number from numbers(50) on instance 
(cluster.py:3455, query) 2025-06-08 17:45:35 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:35 [ 496 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:45:35 [ 496 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:45:36 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:36 [ 496 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:45:36 [ 496 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:45:36 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:45:36 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:45:37 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:45:37 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) _________________________ test_concurrent_transactions _________________________ started_cluster = def test_concurrent_transactions(started_cluster): def transaction(thread_id): conn = get_postgres_conn( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port, database=True, auto_commit=False, ) cursor = conn.cursor() for query in queries: cursor.execute(query.format(thread_id)) print("thread {}, query {}".format(thread_id, query)) conn.commit() NUM_TABLES = 6 pg_manager.create_and_fill_postgres_tables(NUM_TABLES, numbers=0) threads = [] threads_num = 6 for i in range(threads_num): threads.append(threading.Thread(target=transaction, args=(i,))) pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port ) for thread in threads: time.sleep(random.uniform(0, 0.5)) thread.start() for thread in threads: thread.join() for i in range(NUM_TABLES): > check_tables_are_synchronized(instance, f"postgresql_replica_{i}") test_postgresql_replica_database_engine_1/test.py:691: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:392: in check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. 
Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000001d2e15a3 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000eff24b4 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x0000000007d363fc E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x00000000173a87a6 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x00000000173a162f E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000001739ca9a E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x0000000017862e10 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000001785f7e0 E 8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: std::unique_ptr> std::__function::__policy_invoker> (DB::InterpreterFactory::Arguments const&)>::__call_impl> (DB::InterpreterFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::InterpreterFactory::Arguments const&) @ 0x0000000017864110 E 9. ./build_docker/./src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::shared_ptr&, std::shared_ptr, DB::SelectQueryOptions const&) @ 0x00000000177cfb6b E 10. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000017dbf13f E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000017dbac38 E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x00000000195edc92 E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x000000001960c2e8 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001d16e0a3 E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000001d16e912 E 16. 
./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001d3705c7 E 17. ./build_docker/./base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x000000001d36e890 E 18. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001d36cd4a E 19. __tsan_thread_start_func @ 0x000000000706b0cf E 20. ? @ 0x00007f58a6845ac3 E 21. ? @ 0x00007f58a68d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_5" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 0, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 0, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 0, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 0, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 1, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 1, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 1, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 0, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 1, query 
INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 1, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 1, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 2, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 2, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 0, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 2, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 0, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 1, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 1, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 3, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 2, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 1, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 3, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 3, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 3, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 3, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 3, query UPDATE 
postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 2, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 3, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 4, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 2, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 4, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 2, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 4, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 4, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 4, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 3, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 4, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 3, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 3, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 5, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 5, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 5, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 4, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 5, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 4, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 4, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 
5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 5, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 5, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 5, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; Checking table postgresql_replica_0 exists in test_database Checking table is synchronized: test_database.postgresql_replica_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:45:37 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:37 [ 496 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:45:37 [ 496 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:45:41 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:41 [ 496 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:45:41 [ 496 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:45:42 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:45:44 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:45:45 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:45:45 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) __________________________ test_different_data_types ___________________________ started_cluster = def test_different_data_types(started_cluster): conn = get_postgres_conn( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port, database=True, ) cursor = conn.cursor() cursor.execute("drop table if exists test_data_types;") cursor.execute("drop table if exists test_array_data_type;") cursor.execute( """CREATE TABLE test_data_types ( id integer PRIMARY KEY, a smallint, b integer, c bigint, d real, e double precision, f serial, g bigserial, h timestamp, i date, j decimal(5, 5), k numeric(5, 5))""" ) cursor.execute( """CREATE TABLE test_array_data_type ( key Integer NOT NULL PRIMARY KEY, a Date[] NOT NULL, -- Date b Timestamp[] NOT NULL, -- DateTime64(6) c real[][] NOT NULL, -- Float32 d double precision[][] NOT NULL, -- Float64 e decimal(5, 5)[][][] NOT NULL, -- Decimal32 f integer[][][] NOT NULL, -- Int32 g Text[][][][][] NOT NULL, -- String h Integer[][][], -- Nullable(Int32) i Char(2)[][][][], -- Nullable(String) k Char(2)[] -- Nullable(String) )""" ) pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port ) for i in range(10): instance.query( """ INSERT INTO postgres_database.test_data_types VALUES ({}, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 
9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2)""".format( i ) ) > check_tables_are_synchronized(instance, "test_data_types", "id") test_postgresql_replica_database_engine_1/test.py:170: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:392: in check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.test_data_types' in scope SELECT * FROM `test_database.test_data_types` ORDER BY id ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000001d2e15a3 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000eff24b4 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x0000000007d363fc E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x00000000173a87a6 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x00000000173a162f E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000001739ca9a E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x0000000017862e10 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000001785f7e0 E 8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: std::unique_ptr> std::__function::__policy_invoker> (DB::InterpreterFactory::Arguments const&)>::__call_impl> (DB::InterpreterFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::InterpreterFactory::Arguments const&) @ 0x0000000017864110 E 9. 
./build_docker/./src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::shared_ptr&, std::shared_ptr, DB::SelectQueryOptions const&) @ 0x00000000177cfb6b E 10. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000017dbf13f E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000017dbac38 E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x00000000195edc92 E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x000000001960c2e8 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001d16e0a3 E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000001d16e912 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001d3705c7 E 17. ./build_docker/./base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x000000001d36e890 E 18. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001d36cd4a E 19. __tsan_thread_start_func @ 0x000000000706b0cf E 20. ? @ 0x00007f58a6845ac3 E 21. ? @ 0x00007f58a68d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.test_data_types` order by id;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Checking table test_data_types exists in test_database Checking table is synchronized: test_database.test_data_types ------------------------------ Captured log call ------------------------------- 2025-06-08 17:45:45 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:45 [ 496 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:45:45 [ 496 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:45:45 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (0, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:45:46 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (1, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:45:46 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (2, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:45:46 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (3, -32768, -2147483648, 
-9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:45:46 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (4, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:45:47 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (5, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:45:47 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (6, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:45:47 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (7, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:45:47 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (8, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:45:47 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (9, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:45:48 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:48 [ 496 ] DEBUG : Executing query select * from `postgres_database`.`test_data_types` order by id; on instance (cluster.py:3455, query) 2025-06-08 17:45:48 [ 496 ] DEBUG : Executing query select * from `test_database.test_data_types` order by id; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:45:48 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:45:49 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:45:49 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:45:49 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) ____________________ test_load_and_sync_all_database_tables ____________________ started_cluster = def test_load_and_sync_all_database_tables(started_cluster): NUM_TABLES = 5 pg_manager.create_and_fill_postgres_tables(NUM_TABLES) pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port ) > check_several_tables_are_synchronized(instance, NUM_TABLES) test_postgresql_replica_database_engine_1/test.py:74: _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:419: in check_several_tables_are_synchronized check_tables_are_synchronized( helpers/postgres_utility.py:392: in check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000001d2e15a3 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000eff24b4 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x0000000007d363fc E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x00000000173a87a6 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x00000000173a162f E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000001739ca9a E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x0000000017862e10 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000001785f7e0 E 8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: std::unique_ptr> std::__function::__policy_invoker> (DB::InterpreterFactory::Arguments const&)>::__call_impl> (DB::InterpreterFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::InterpreterFactory::Arguments const&) @ 0x0000000017864110 E 9. 
./build_docker/./src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::shared_ptr&, std::shared_ptr, DB::SelectQueryOptions const&) @ 0x00000000177cfb6b E 10. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000017dbf13f E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000017dbac38 E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x00000000195edc92 E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x000000001960c2e8 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001d16e0a3 E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000001d16e912 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001d3705c7 E 17. ./build_docker/./base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x000000001d36e890 E 18. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001d36cd4a E 19. __tsan_thread_start_func @ 0x000000000706b0cf E 20. ? @ 0x00007f58a6845ac3 E 21. ? @ 0x00007f58a68d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Checking table postgresql_replica_0 exists in test_database Checking table is synchronized: test_database.postgresql_replica_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:45:49 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_0` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:49 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_1` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:50 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_2` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:50 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_3` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:50 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_4` SELECT number, number from numbers(50) on instance 
(cluster.py:3455, query) 2025-06-08 17:45:50 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:50 [ 496 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:45:51 [ 496 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:45:51 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:51 [ 496 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:45:51 [ 496 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:45:52 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:45:52 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:45:52 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:45:52 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) _________________ test_load_and_sync_subset_of_database_tables _________________ started_cluster = def test_load_and_sync_subset_of_database_tables(started_cluster): NUM_TABLES = 10 pg_manager.create_and_fill_postgres_tables(NUM_TABLES) publication_tables = "" for i in range(NUM_TABLES): if i < int(NUM_TABLES / 2): if publication_tables != "": publication_tables += ", " publication_tables += f"postgresql_replica_{i}" pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port, settings=[ "materialized_postgresql_tables_list = '{}'".format(publication_tables) ], ) time.sleep(1) for i in range(int(NUM_TABLES / 2)): table_name = f"postgresql_replica_{i}" assert_nested_table_is_created(instance, table_name) result = instance.query( """SELECT count() FROM system.tables WHERE database = 'test_database';""" ) assert int(result) == int(NUM_TABLES / 2) database_tables = instance.query("SHOW TABLES FROM test_database") for i in range(NUM_TABLES): table_name = "postgresql_replica_{}".format(i) if i < int(NUM_TABLES / 2): assert table_name in database_tables else: assert table_name not in database_tables instance.query( "INSERT INTO postgres_database.{} SELECT 50 + number, {} from numbers(100)".format( table_name, i ) ) for i in range(NUM_TABLES): table_name = f"postgresql_replica_{i}" if i < int(NUM_TABLES / 2): > check_tables_are_synchronized(instance, table_name) test_postgresql_replica_database_engine_1/test.py:276: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:392: in check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): 
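# get_answer() blocks until the client subprocess exits (or the query
# timeout elapses), re-reads the captured stdout/stderr files, raises
# QueryTimeoutExceedException when the watchdog timer fired before the
# process finished, and raises QueryRuntimeException when the return
# code is non-zero or stderr contains non-ignorable output -- the path
# taken in every failure in this run.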
self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000001d2e15a3 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000eff24b4 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x0000000007d363fc E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x00000000173a87a6 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x00000000173a162f E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000001739ca9a E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x0000000017862e10 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000001785f7e0 E 8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: std::unique_ptr> std::__function::__policy_invoker> (DB::InterpreterFactory::Arguments const&)>::__call_impl> (DB::InterpreterFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::InterpreterFactory::Arguments const&) @ 0x0000000017864110 E 9. ./build_docker/./src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::shared_ptr&, std::shared_ptr, DB::SelectQueryOptions const&) @ 0x00000000177cfb6b E 10. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000017dbf13f E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000017dbac38 E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x00000000195edc92 E 13. 
./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x000000001960c2e8 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001d16e0a3 E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000001d16e912 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001d3705c7 E 17. ./build_docker/./base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x000000001d36e890 E 18. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001d36cd4a E 19. __tsan_thread_start_func @ 0x000000000706b0cf E 20. ? @ 0x00007f58a6845ac3 E 21. ? @ 0x00007f58a68d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_5" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_6" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_7" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_8" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_9" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Checking table postgresql_replica_0 exists in test_database Checking table postgresql_replica_1 exists in test_database Checking table postgresql_replica_2 exists in test_database Checking table postgresql_replica_3 exists in test_database Checking table postgresql_replica_4 exists in test_database Checking table postgresql_replica_0 exists in test_database Checking table is synchronized: test_database.postgresql_replica_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:45:52 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_0` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:52 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_1` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:53 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_2` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:53 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_3` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 
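Every failure in this run reduces to the same defect: the synchronization check issues select * from `test_database.postgresql_replica_0` order by key;, wrapping the whole database-qualified name in a single pair of backticks, while the working PostgreSQL-side probe quotes the two parts separately (select * from `postgres_database`.`postgresql_replica_0`). With the analyzer enabled in this server build (24.3.18), a single backticked string resolves as one table identifier that happens to contain a dot, so every such query fails with UNKNOWN_TABLE (code 60). A minimal sketch of the corrected query construction for helpers/postgres_utility.py, assuming the helper assembles the string roughly like this (the function name and signature below are illustrative, not the actual helper):

    # Hypothetical reconstruction of the query builder behind
    # check_tables_are_synchronized; the real helper may differ.
    def build_result_query(database: str, table: str, order_by: str = "key") -> str:
        # Quote database and table as two separate identifiers:
        # `db`.`table` names table "table" inside database "db", whereas
        # `db.table` is one identifier and raises UNKNOWN_TABLE under the analyzer.
        return f"select * from `{database}`.`{table}` order by {order_by};"

    # Observed failing form: select * from `test_database.postgresql_replica_0` order by key;
    # Corrected form:        select * from `test_database`.`postgresql_replica_0` order by key;

The same two-part quoting already appears throughout these logs on the postgres_database side of each comparison, which is why only the test_database half of every check fails.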
2025-06-08 17:45:53 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_4` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:53 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_5` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:54 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_6` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:54 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_7` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:54 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_8` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:54 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_9` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:45:54 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:55 [ 496 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') SETTINGS materialized_postgresql_tables_list = 'postgresql_replica_0, postgresql_replica_1, postgresql_replica_2, postgresql_replica_3, postgresql_replica_4' on instance (cluster.py:3455, query) 2025-06-08 17:45:55 [ 496 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:45:56 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:56 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:56 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:57 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:57 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:45:57 [ 496 ] DEBUG : Executing query SELECT count() FROM system.tables WHERE database = 'test_database'; on instance (cluster.py:3455, query) 2025-06-08 17:45:57 [ 496 ] DEBUG : Executing query SHOW TABLES FROM test_database on instance (cluster.py:3455, query) 2025-06-08 17:45:57 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_0 SELECT 50 + number, 0 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:45:58 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_1 SELECT 50 + number, 1 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:45:58 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_2 SELECT 50 + number, 2 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:45:58 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_3 SELECT 50 + number, 3 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:45:58 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_4 SELECT 50 + number, 4 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:45:59 [ 496 ] DEBUG : Executing query 
INSERT INTO postgres_database.postgresql_replica_5 SELECT 50 + number, 5 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:45:59 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_6 SELECT 50 + number, 6 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:45:59 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_7 SELECT 50 + number, 7 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:45:59 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_8 SELECT 50 + number, 8 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:45:59 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_9 SELECT 50 + number, 9 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:46:00 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:46:00 [ 496 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:46:00 [ 496 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:46:00 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:46:00 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:46:01 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:46:01 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) _________________________ test_many_concurrent_queries _________________________ started_cluster = def test_many_concurrent_queries(started_cluster): table = "test_many_conc" query_pool = [ "DELETE FROM {} WHERE (value*value) % 3 = 0;", "UPDATE {} SET value = value - 125 WHERE key % 2 = 0;", "DELETE FROM {} WHERE key % 10 = 0;", "UPDATE {} SET value = value*5 WHERE key % 2 = 1;", "DELETE FROM {} WHERE value % 2 = 0;", "UPDATE {} SET value = value + 2000 WHERE key % 5 = 0;", "DELETE FROM {} WHERE value % 3 = 0;", "UPDATE {} SET value = value * 2 WHERE key % 3 = 0;", "DELETE FROM {} WHERE value % 9 = 2;", "UPDATE {} SET value = value + 2 WHERE key % 3 = 1;", "DELETE FROM {} WHERE value%5 = 0;", ] NUM_TABLES = 5 conn = get_postgres_conn( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port, database=True, ) cursor = conn.cursor() pg_manager.create_and_fill_postgres_tables( NUM_TABLES, numbers=10000, table_name_base=table ) def attack(thread_id): print("thread {}".format(thread_id)) k = 10000 for i in range(20): query_id = random.randrange(0, len(query_pool) - 1) table_id = random.randrange(0, 5) # num tables random_table_name = f"{table}_{table_id}" table_name = f"{table}_{thread_id}" # random update / delete query cursor.execute(query_pool[query_id].format(random_table_name)) print( "Executing for table {} query: {}".format( random_table_name, query_pool[query_id] ) ) # allow some thread to do inserts (not to violate key constraints) if thread_id < 5: print("try insert table {}".format(thread_id)) instance.query( "INSERT INTO 
postgres_database.{} SELECT {}*10000*({} + number), number from numbers(1000)".format( table_name, thread_id, k ) ) k += 1 print("insert table {} ok".format(thread_id)) if i == 5: # also change primary key value print("try update primary key {}".format(thread_id)) cursor.execute( "UPDATE {table}_{} SET key=key%100000+100000*{} WHERE key%{}=0".format( table_name, i + 1, i + 1 ) ) print("update primary key {} ok".format(thread_id)) n = [10000] threads = [] threads_num = 16 for i in range(threads_num): threads.append(threading.Thread(target=attack, args=(i,))) pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port ) for thread in threads: time.sleep(random.uniform(0, 1)) thread.start() n[0] = 50000 for table_id in range(NUM_TABLES): n[0] += 1 table_name = f"{table}_{table_id}" instance.query( "INSERT INTO postgres_database.{} SELECT {} + number, number from numbers(5000)".format( table_name, n[0] ) ) # cursor.execute("UPDATE {table}_{} SET key=key%100000+100000*{} WHERE key%{}=0".format(table_id, table_id+1, table_id+1)) for thread in threads: thread.join() for i in range(NUM_TABLES): table_name = f"{table}_{i}" > check_tables_are_synchronized(instance, table_name) test_postgresql_replica_database_engine_1/test.py:492: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:392: in check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.test_many_conc_0' in scope SELECT * FROM `test_database.test_many_conc_0` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000001d2e15a3 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000eff24b4 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x0000000007d363fc E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x00000000173a87a6 E 4. 
./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x00000000173a162f E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000001739ca9a E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x0000000017862e10 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000001785f7e0 E 8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: std::unique_ptr> std::__function::__policy_invoker> (DB::InterpreterFactory::Arguments const&)>::__call_impl> (DB::InterpreterFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::InterpreterFactory::Arguments const&) @ 0x0000000017864110 E 9. ./build_docker/./src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::shared_ptr&, std::shared_ptr, DB::SelectQueryOptions const&) @ 0x00000000177cfb6b E 10. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000017dbf13f E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000017dbac38 E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x00000000195edc92 E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x000000001960c2e8 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001d16e0a3 E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000001d16e912 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001d3705c7 E 17. ./build_docker/./base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x000000001d36e890 E 18. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001d36cd4a E 19. __tsan_thread_start_func @ 0x000000000706b0cf E 20. ? @ 0x00007f58a6845ac3 E 21. ? @ 0x00007f58a68d7850 E . 
(UNKNOWN_TABLE) E (query: select * from `test_database.test_many_conc_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "test_many_conc_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "test_many_conc_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "test_many_conc_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "test_many_conc_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "test_many_conc_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) thread 0 Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 9 = 2; try insert table 0 thread 1 Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 3 = 0; try insert table 1 insert table 1 ok Executing for table test_many_conc_4 query: DELETE FROM {} WHERE (value*value) % 3 = 0; try insert table 1 thread 2 Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 3 = 0; try insert table 2 insert table 2 ok Executing for table test_many_conc_2 query: DELETE FROM {} WHERE key % 10 = 0; try insert table 2 insert table 2 ok Executing for table test_many_conc_0 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; try insert table 2 thread 3 Executing for table test_many_conc_3 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; try insert table 3 insert table 3 ok Executing for table test_many_conc_2 query: DELETE FROM {} WHERE (value*value) % 3 = 0; try insert table 3 thread 4 Executing for table test_many_conc_2 query: DELETE FROM {} WHERE (value*value) % 3 = 0; try insert table 4 insert table 4 ok Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2; try insert table 4 thread 5 Executing for table test_many_conc_1 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_4 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_3 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_2 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 3 = 0; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_3 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table 
test_many_conc_0 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1;
Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2;
Executing for table test_many_conc_0 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1;
thread 6
Executing for table test_many_conc_4 query: DELETE FROM {} WHERE (value*value) % 3 = 0;
Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 9 = 2;
Executing for table test_many_conc_4 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0;
thread 7
Executing for table test_many_conc_1 query: DELETE FROM {} WHERE key % 10 = 0;
Executing for table test_many_conc_0 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0;
Executing for table test_many_conc_2 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0;
Executing for table test_many_conc_4 query: DELETE FROM {} WHERE (value*value) % 3 = 0;
Executing for table test_many_conc_2 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0;
Executing for table test_many_conc_3 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0;
Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 3 = 0;
Executing for table test_many_conc_1 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1;
Executing for table test_many_conc_3 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0;
thread 8
thread 9
Executing for table test_many_conc_4 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1;
Executing for table test_many_conc_3 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1;
Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 3 = 0;
Executing for table test_many_conc_4 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0;
Executing for table test_many_conc_2 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1;
Executing for table test_many_conc_2 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0;
thread 10
Executing for table test_many_conc_4 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1;
Executing for table test_many_conc_0 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0;
Executing for table test_many_conc_0 query: DELETE FROM {} WHERE key % 10 = 0;
Executing for table test_many_conc_3 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0;
Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 9 = 2;
Executing for table test_many_conc_3 query: DELETE FROM {} WHERE key % 10 = 0;
thread 11
Executing for table test_many_conc_1 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1;
Executing for table test_many_conc_1 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0;
Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 2 = 0;
Executing for table test_many_conc_0 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1;
Executing for table test_many_conc_4 query: DELETE FROM {} WHERE key % 10 = 0;
Executing for table test_many_conc_3 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0;
Executing for table test_many_conc_4 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0;
thread 12
Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0;
Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 2 = 0;
Executing for table test_many_conc_4 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0;
Executing for table test_many_conc_4 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0;
Executing for table test_many_conc_2 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0;
Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 9 = 2;
Executing for table test_many_conc_0 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1;
thread 13
Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 3 = 0;
Executing for table test_many_conc_4 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1;
Executing for table test_many_conc_3 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1;
Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1;
Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 9 = 2;
Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1;
Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 3 = 0;
Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2;
Executing for table test_many_conc_0 query: DELETE FROM {} WHERE key % 10 = 0;
thread 14
Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0;
Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 3 = 0;
thread 15
Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 2 = 0;
Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 3 = 0;
Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 3 = 0;
Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 3 = 0;
Executing for table test_many_conc_1 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0;
Executing for table test_many_conc_2 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1;
Executing for table test_many_conc_0 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0;
Executing for table test_many_conc_3 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0;
Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 3 = 0;
Executing for table test_many_conc_1 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0;
Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0;
Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 9 = 2;
Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 9 = 2;
Executing for table test_many_conc_2 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0;
Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 3 = 0;
Executing for table test_many_conc_1 query: DELETE FROM {} WHERE key % 10 = 0;
Checking table test_many_conc_0 exists in test_database
Checking table is synchronized: test_database.test_many_conc_0
------------------------------ Captured log call -------------------------------
2025-06-08 17:46:01 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`test_many_conc_0` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query)
2025-06-08 17:46:01 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`test_many_conc_1` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query)
2025-06-08 17:46:02 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`test_many_conc_2` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query)
2025-06-08 17:46:02 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`test_many_conc_3` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query)
2025-06-08 17:46:02 [ 496 ] DEBUG : Executing query
INSERT INTO `postgres_database`.`test_many_conc_4` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query) 2025-06-08 17:46:03 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:46:03 [ 496 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:46:03 [ 496 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:46:04 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_0 SELECT 0*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:46:05 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_1 SELECT 1*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:46:05 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_1 SELECT 1*10000*(10001 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:46:06 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_2 SELECT 2*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:46:06 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_2 SELECT 2*10000*(10001 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:46:06 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_2 SELECT 2*10000*(10002 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:46:06 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_3 SELECT 3*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:46:06 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_3 SELECT 3*10000*(10001 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:46:07 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_4 SELECT 4*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:46:07 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_4 SELECT 4*10000*(10001 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:46:13 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_0 SELECT 50001 + number, number from numbers(5000) on instance (cluster.py:3455, query) 2025-06-08 17:46:14 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_1 SELECT 50002 + number, number from numbers(5000) on instance (cluster.py:3455, query) 2025-06-08 17:46:14 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_2 SELECT 50003 + number, number from numbers(5000) on instance (cluster.py:3455, query) 2025-06-08 17:46:14 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_3 SELECT 50004 + number, number from numbers(5000) on instance (cluster.py:3455, query) 2025-06-08 17:46:15 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_4 SELECT 50005 + number, number from numbers(5000) on instance (cluster.py:3455, query) 2025-06-08 17:46:15 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on 
instance (cluster.py:3455, query) 2025-06-08 17:46:15 [ 496 ] DEBUG : Executing query select * from `postgres_database`.`test_many_conc_0` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:46:15 [ 496 ] DEBUG : Executing query select * from `test_database.test_many_conc_0` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:46:16 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:46:16 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:46:16 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:46:16 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) ___________________________ test_multiple_databases ____________________________ started_cluster = def test_multiple_databases(started_cluster): NUM_TABLES = 5 conn = get_postgres_conn( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port, database=False, ) pg_manager.create_postgres_db("postgres_database_1") pg_manager.create_postgres_db("postgres_database_2") conn1 = get_postgres_conn( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port, database=True, database_name="postgres_database_1", ) conn2 = get_postgres_conn( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port, database=True, database_name="postgres_database_2", ) cursor1 = conn1.cursor() cursor2 = conn2.cursor() pg_manager.create_clickhouse_postgres_db( "postgres_database_1", "", "postgres_database_1", ) pg_manager.create_clickhouse_postgres_db( "postgres_database_2", "", "postgres_database_2", ) cursors = [cursor1, cursor2] for cursor_id in range(len(cursors)): for i in range(NUM_TABLES): table_name = "postgresql_replica_{}".format(i) create_postgres_table(cursors[cursor_id], table_name) instance.query( "INSERT INTO postgres_database_{}.{} SELECT number, number from numbers(50)".format( cursor_id + 1, table_name ) ) print( "database 1 tables: ", instance.query( """SELECT name FROM system.tables WHERE database = 'postgres_database_1';""" ), ) print( "database 2 tables: ", instance.query( """SELECT name FROM system.tables WHERE database = 'postgres_database_2';""" ), ) pg_manager.create_materialized_db( started_cluster.postgres_ip, started_cluster.postgres_port, "test_database_1", "postgres_database_1", ) pg_manager.create_materialized_db( started_cluster.postgres_ip, started_cluster.postgres_port, "test_database_2", "postgres_database_2", ) cursors = [cursor1, cursor2] for cursor_id in range(len(cursors)): for i in range(NUM_TABLES): table_name = "postgresql_replica_{}".format(i) instance.query( "INSERT INTO postgres_database_{}.{} SELECT 50 + number, number from numbers(50)".format( cursor_id + 1, table_name ) ) for cursor_id in range(len(cursors)): for i in range(NUM_TABLES): table_name = "postgresql_replica_{}".format(i) > check_tables_are_synchronized( instance, table_name, "key", "postgres_database_{}".format(cursor_id + 1), "test_database_{}".format(cursor_id + 1), ) test_postgresql_replica_database_engine_1/test.py:648: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:392: in 
check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database_1.postgresql_replica_0' in scope SELECT * FROM `test_database_1.postgresql_replica_0` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000001d2e15a3 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000eff24b4 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x0000000007d363fc E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x00000000173a87a6 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x00000000173a162f E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000001739ca9a E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x0000000017862e10 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000001785f7e0 E 8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: std::unique_ptr> std::__function::__policy_invoker> (DB::InterpreterFactory::Arguments const&)>::__call_impl> (DB::InterpreterFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::InterpreterFactory::Arguments const&) @ 0x0000000017864110 E 9. ./build_docker/./src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::shared_ptr&, std::shared_ptr, DB::SelectQueryOptions const&) @ 0x00000000177cfb6b E 10. 
./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000017dbf13f E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000017dbac38 E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x00000000195edc92 E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x000000001960c2e8 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001d16e0a3 E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000001d16e912 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001d3705c7 E 17. ./build_docker/./base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x000000001d36e890 E 18. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001d36cd4a E 19. __tsan_thread_start_func @ 0x000000000706b0cf E 20. ? @ 0x00007f58a6845ac3 E 21. ? @ 0x00007f58a68d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database_1.postgresql_replica_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) database 1 tables: postgresql_replica_0 postgresql_replica_1 postgresql_replica_2 postgresql_replica_3 postgresql_replica_4 database 2 tables: postgresql_replica_0 postgresql_replica_1 postgresql_replica_2 postgresql_replica_3 postgresql_replica_4 Checking table postgresql_replica_0 exists in test_database_1 Checking table is synchronized: test_database_1.postgresql_replica_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:46:17 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_1" on instance (cluster.py:3455, query) 2025-06-08 17:46:17 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database_1" ENGINE = 
PostgreSQL('172.16.3.2:5432', 'postgres_database_1', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:46:17 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_2" on instance (cluster.py:3455, query) 2025-06-08 17:46:17 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database_2" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database_2', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:46:17 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_0 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:18 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_1 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:18 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_2 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:18 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_3 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:18 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_4 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:18 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_0 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:19 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_1 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:19 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_2 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:19 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_3 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:19 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_4 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:19 [ 496 ] DEBUG : Executing query SELECT name FROM system.tables WHERE database = 'postgres_database_1'; on instance (cluster.py:3455, query) 2025-06-08 17:46:20 [ 496 ] DEBUG : Executing query SELECT name FROM system.tables WHERE database = 'postgres_database_2'; on instance (cluster.py:3455, query) 2025-06-08 17:46:20 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database_1` on instance (cluster.py:3455, query) 2025-06-08 17:46:20 [ 496 ] DEBUG : Executing query CREATE DATABASE `test_database_1` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database_1', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:46:20 [ 496 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:46:20 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database_2` on instance (cluster.py:3455, query) 2025-06-08 17:46:21 [ 496 ] DEBUG : Executing query CREATE DATABASE `test_database_2` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database_2', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:46:21 [ 496 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 
17:46:21 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_0 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:21 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_1 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:21 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_2 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:22 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_3 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:22 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_4 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:22 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_0 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:22 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_1 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:22 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_2 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:23 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_3 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:23 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_4 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:23 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database_1` on instance (cluster.py:3455, query) 2025-06-08 17:46:23 [ 496 ] DEBUG : Executing query select * from `postgres_database_1`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:46:24 [ 496 ] DEBUG : Executing query select * from `test_database_1.postgresql_replica_0` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:46:24 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database_1` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:46:24 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database_2` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:46:24 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:46:24 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_1" on instance (cluster.py:3455, query) 2025-06-08 17:46:25 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_2" on instance (cluster.py:3455, query) 2025-06-08 17:46:25 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:46:25 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) ________________________________ test_quoting_1 ________________________________ started_cluster = def test_quoting_1(started_cluster): table_name = "user" 
pg_manager.create_and_fill_postgres_table(table_name) pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port ) > check_tables_are_synchronized(instance, table_name) test_postgresql_replica_database_engine_1/test.py:829: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:392: in check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.user' in scope SELECT * FROM `test_database.user` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000001d2e15a3 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000eff24b4 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x0000000007d363fc E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x00000000173a87a6 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x00000000173a162f E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000001739ca9a E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x0000000017862e10 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000001785f7e0 E 8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: std::unique_ptr> std::__function::__policy_invoker> (DB::InterpreterFactory::Arguments const&)>::__call_impl> (DB::InterpreterFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::InterpreterFactory::Arguments const&) @ 0x0000000017864110 E 9. 
./build_docker/./src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::shared_ptr&, std::shared_ptr, DB::SelectQueryOptions const&) @ 0x00000000177cfb6b E 10. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000017dbf13f E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000017dbac38 E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x00000000195edc92 E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x000000001960c2e8 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001d16e0a3 E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000001d16e912 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001d3705c7 E 17. ./build_docker/./base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x000000001d36e890 E 18. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001d36cd4a E 19. __tsan_thread_start_func @ 0x000000000706b0cf E 20. ? @ 0x00007f58a6845ac3 E 21. ? @ 0x00007f58a68d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.user` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "user" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Checking table user exists in test_database Checking table is synchronized: test_database.user ------------------------------ Captured log call ------------------------------- 2025-06-08 17:46:25 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`user` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:26 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:46:26 [ 496 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:46:26 [ 496 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:46:26 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:46:26 [ 496 ] DEBUG : Executing query select * from `postgres_database`.`user` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:46:27 [ 496 ] DEBUG : Executing query select * from `test_database.user` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:46:27 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:46:27 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:46:27 [ 496 ] DEBUG : Executing 
query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:46:27 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) ________________________________ test_quoting_2 ________________________________ started_cluster = def test_quoting_2(started_cluster): table_name = "user" pg_manager.create_and_fill_postgres_table(table_name) pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port, settings=[f"materialized_postgresql_tables_list = '{table_name}'"], ) > check_tables_are_synchronized(instance, table_name) test_postgresql_replica_database_engine_1/test.py:840: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:392: in check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.user' in scope SELECT * FROM `test_database.user` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000001d2e15a3 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000eff24b4 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x0000000007d363fc E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x00000000173a87a6 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x00000000173a162f E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000001739ca9a E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x0000000017862e10 E 7. 
./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000001785f7e0 E 8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: std::unique_ptr> std::__function::__policy_invoker> (DB::InterpreterFactory::Arguments const&)>::__call_impl> (DB::InterpreterFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::InterpreterFactory::Arguments const&) @ 0x0000000017864110 E 9. ./build_docker/./src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::shared_ptr&, std::shared_ptr, DB::SelectQueryOptions const&) @ 0x00000000177cfb6b E 10. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000017dbf13f E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000017dbac38 E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x00000000195edc92 E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x000000001960c2e8 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001d16e0a3 E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000001d16e912 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001d3705c7 E 17. ./build_docker/./base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x000000001d36e890 E 18. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001d36cd4a E 19. __tsan_thread_start_func @ 0x000000000706b0cf E 20. ? @ 0x00007f58a6845ac3 E 21. ? @ 0x00007f58a68d7850 E . 
(UNKNOWN_TABLE) E (query: select * from `test_database.user` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "user" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Checking table user exists in test_database Checking table is synchronized: test_database.user ------------------------------ Captured log call ------------------------------- 2025-06-08 17:46:27 [ 496 ] DEBUG : Executing query INSERT INTO `postgres_database`.`user` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:46:28 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:46:28 [ 496 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') SETTINGS materialized_postgresql_tables_list = 'user' on instance (cluster.py:3455, query) 2025-06-08 17:46:28 [ 496 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:46:28 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:46:28 [ 496 ] DEBUG : Executing query select * from `postgres_database`.`user` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:46:29 [ 496 ] DEBUG : Executing query select * from `test_database.user` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:46:29 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:46:29 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:46:29 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:46:30 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) _________________________ test_replica_identity_index __________________________ started_cluster = def test_replica_identity_index(started_cluster): pg_manager.create_postgres_table( "postgresql_replica", template=postgres_table_template_3 ) pg_manager.execute("CREATE unique INDEX idx on postgresql_replica(key1, key2);") pg_manager.execute( "ALTER TABLE postgresql_replica REPLICA IDENTITY USING INDEX idx" ) instance.query( "INSERT INTO postgres_database.postgresql_replica SELECT number, number, number, number from numbers(50, 10)" ) pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port ) instance.query( "INSERT INTO postgres_database.postgresql_replica SELECT number, number, number, number from numbers(100, 10)" ) > check_tables_are_synchronized(instance, "postgresql_replica", order_by="key1") test_postgresql_replica_database_engine_1/test.py:334: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:392: in check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in 
wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica' in scope SELECT * FROM `test_database.postgresql_replica` ORDER BY key1 ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000001d2e15a3 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000eff24b4 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x0000000007d363fc E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x00000000173a87a6 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x00000000173a162f E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000001739ca9a E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x0000000017862e10 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000001785f7e0 E 8. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: std::unique_ptr> std::__function::__policy_invoker> (DB::InterpreterFactory::Arguments const&)>::__call_impl> (DB::InterpreterFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::InterpreterFactory::Arguments const&) @ 0x0000000017864110 E 9. ./build_docker/./src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::shared_ptr&, std::shared_ptr, DB::SelectQueryOptions const&) @ 0x00000000177cfb6b E 10. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000017dbf13f E 11. 
./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000017dbac38 E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x00000000195edc92 E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x000000001960c2e8 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001d16e0a3 E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000001d16e912 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001d3705c7 E 17. ./build_docker/./base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x000000001d36e890 E 18. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001d36cd4a E 19. __tsan_thread_start_func @ 0x000000000706b0cf E 20. ? @ 0x00007f58a6845ac3 E 21. ? @ 0x00007f58a68d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica` order by key1;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica" ( key1 Integer NOT NULL, value1 Integer, key2 Integer NOT NULL, value2 Integer NOT NULL) Checking table postgresql_replica exists in test_database Checking table is synchronized: test_database.postgresql_replica ------------------------------ Captured log call ------------------------------- 2025-06-08 17:46:30 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica SELECT number, number, number, number from numbers(50, 10) on instance (cluster.py:3455, query) 2025-06-08 17:46:30 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:46:30 [ 496 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:46:30 [ 496 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:46:30 [ 496 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica SELECT number, number, number, number from numbers(100, 10) on instance (cluster.py:3455, query) 2025-06-08 17:46:31 [ 496 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:46:31 [ 496 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica` order by key1; on instance (cluster.py:3455, query) 2025-06-08 17:46:31 [ 496 ] DEBUG : Executing query select * from `test_database.postgresql_replica` order by key1; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:46:31 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:46:31 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:46:32 [ 496 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on 
instance (cluster.py:3455, query) 2025-06-08 17:46:32 [ 496 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:46:32 [ 496 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'stop', '--timeout', '20'] (cluster.py:113, run_and_check) 2025-06-08 17:46:33 [ 496 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine1_instance_1 ... (cluster.py:123, run_and_check) 2025-06-08 17:46:33 [ 496 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... (cluster.py:123, run_and_check) 2025-06-08 17:46:33 [ 496 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... done (cluster.py:123, run_and_check) 2025-06-08 17:46:33 [ 496 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine1_instance_1 ... done (cluster.py:123, run_and_check) 2025-06-08 17:46:33 [ 496 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/instance/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/instance/logs/stderr.log* || true'] (cluster.py:113, run_and_check) 2025-06-08 17:46:33 [ 496 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_1/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'down', '--volumes'] (cluster.py:113, run_and_check) 2025-06-08 17:46:34 [ 496 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine1_instance_1 ... (cluster.py:123, run_and_check) 2025-06-08 17:46:34 [ 496 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... (cluster.py:123, run_and_check) 2025-06-08 17:46:34 [ 496 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine1_instance_1 ... done (cluster.py:123, run_and_check) 2025-06-08 17:46:34 [ 496 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... 
done (cluster.py:123, run_and_check) 2025-06-08 17:46:34 [ 496 ] DEBUG : Stderr:Removing network roottestpostgresqlreplicadatabaseengine1_default (cluster.py:123, run_and_check) 2025-06-08 17:46:34 [ 496 ] DEBUG : Cleanup called (cluster.py:801, cleanup) 2025-06-08 17:46:34 [ 496 ] DEBUG : Docker networks for project roottestpostgresqlreplicadatabaseengine1 are NETWORK ID NAME DRIVER SCOPE (cluster.py:780, print_all_docker_pieces) 2025-06-08 17:46:34 [ 496 ] DEBUG : Docker containers for project roottestpostgresqlreplicadatabaseengine1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:788, print_all_docker_pieces) 2025-06-08 17:46:34 [ 496 ] DEBUG : Docker volumes for project roottestpostgresqlreplicadatabaseengine1 are DRIVER VOLUME NAME (cluster.py:796, print_all_docker_pieces) 2025-06-08 17:46:34 [ 496 ] DEBUG : Command:docker container list --all --filter name='^/roottestpostgresqlreplicadatabaseengine1_.*_1$' --format '{{.ID}}:{{.Names}}' (cluster.py:113, run_and_check) 2025-06-08 17:46:34 [ 496 ] DEBUG : Unstopped containers: {} (cluster.py:815, cleanup) 2025-06-08 17:46:34 [ 496 ] DEBUG : No running containers for project: roottestpostgresqlreplicadatabaseengine1 (cluster.py:829, cleanup) 2025-06-08 17:46:34 [ 496 ] DEBUG : Trying to prune unused networks... (cluster.py:835, cleanup) 2025-06-08 17:46:34 [ 496 ] DEBUG : Trying to prune unused images... (cluster.py:851, cleanup) 2025-06-08 17:46:34 [ 496 ] DEBUG : Command:['docker', 'image', 'prune', '-f'] (cluster.py:113, run_and_check) 2025-06-08 17:46:34 [ 496 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:121, run_and_check) 2025-06-08 17:46:34 [ 496 ] DEBUG : Images pruned (cluster.py:854, cleanup) 2025-06-08 17:46:34 [ 496 ] DEBUG : Trying to prune unused volumes... 
(cluster.py:860, cleanup) 2025-06-08 17:46:34 [ 496 ] DEBUG : Command:['docker volume ls | wc -l'] (cluster.py:113, run_and_check) 2025-06-08 17:46:34 [ 496 ] DEBUG : Stdout:1 (cluster.py:121, run_and_check) =============================== warnings summary =============================== test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries /usr/local/lib/python3.10/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-46 (attack) Traceback (most recent call last): File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner self.run() File "/usr/lib/python3.10/threading.py", line 953, in run self._target(*self._args, **self._kwargs) File "/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/test.py", line 433, in attack cursor.execute(query_pool[query_id].format(random_table_name)) psycopg2.errors.NumericValueOutOfRange: integer out of range warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html ============================== slowest durations =============================== 29.51s setup test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_filesystem_error 28.50s setup test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory0-node] 22.04s teardown test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory2-node] 16.00s setup test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication 14.59s call test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries 10.13s call test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication 7.93s call test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables 7.17s call test_postgresql_replica_database_engine_1/test.py::test_multiple_databases 6.75s call test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication 4.45s call test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions 3.53s teardown test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_filesystem_error 3.36s teardown test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions 3.36s call test_postgresql_replica_database_engine_1/test.py::test_different_data_types 2.39s teardown test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index 2.26s call test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables 2.25s call test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart 1.99s teardown test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication 1.53s call test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index 1.50s teardown test_postgresql_replica_database_engine_1/test.py::test_multiple_databases 1.48s call test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value 1.41s call test_postgresql_replica_database_engine_1/test.py::test_quoting_2 1.36s call test_postgresql_replica_database_engine_1/test.py::test_quoting_1 0.98s teardown test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication 0.89s teardown test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries 0.84s teardown 
test_postgresql_replica_database_engine_1/test.py::test_different_data_types 0.78s teardown test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables 0.74s teardown test_postgresql_replica_database_engine_1/test.py::test_quoting_2 0.73s teardown test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables 0.66s teardown test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart 0.66s teardown test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value 0.65s teardown test_postgresql_replica_database_engine_1/test.py::test_quoting_1 0.22s call test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_filesystem_error 0.10s setup test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory1-node] 0.09s setup test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory2-node] 0.00s teardown test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory0-node] 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_different_data_types 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_quoting_1 0.00s call test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory2-node] 0.00s call test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory0-node] 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_multiple_databases 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_quoting_2 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index 0.00s call test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory1-node] 0.00s teardown test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory1-node] =========================== short test summary info ============================ FAILED test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication FAILED test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication FAILED test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value FAILED test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart FAILED test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions FAILED test_postgresql_replica_database_engine_1/test.py::test_different_data_types FAILED test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables FAILED test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables FAILED test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries FAILED 
=========================== short test summary info ============================
FAILED test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication
FAILED test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication
FAILED test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value
FAILED test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart
FAILED test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions
FAILED test_postgresql_replica_database_engine_1/test.py::test_different_data_types
FAILED test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables
FAILED test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables
FAILED test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries
FAILED test_postgresql_replica_database_engine_1/test.py::test_multiple_databases
FAILED test_postgresql_replica_database_engine_1/test.py::test_quoting_1 - he...
FAILED test_postgresql_replica_database_engine_1/test.py::test_quoting_2 - he...
FAILED test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index
SKIPPED [1] test_merge_tree_load_parts/test.py:227: Skip with debug build and sanitizers. This test intentionally triggers LOGICAL_ERROR which leads to crash with those builds
SKIPPED [3] test_merge_tree_s3/test.py:931: Disabled, will be fixed after https://github.com/ClickHouse/ClickHouse/issues/51152
============= 13 failed, 4 skipped, 1 warning in 182.04s (0:03:02) =============
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration/./runner", line 437, in <module>
    subprocess.check_call(cmd, shell=True)
  File "/usr/lib/python3.10/subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'docker run --rm --name clickhouse_integration_tests_jhoknn --privileged --dns-search='.' --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=2cffe1eae894 -e DOCKER_BASE_TAG=2993bc2bf171 -e DOCKER_KERBERIZED_HADOOP_TAG=ce74919e88f5 -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=a2d3dc777d0c -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS=" -rfEps --run-id=1 --color=no --durations=0 test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_filesystem_error 'test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory0-node]' 'test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory1-node]' 'test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory2-node]' test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions test_postgresql_replica_database_engine_1/test.py::test_different_data_types test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables
test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries test_postgresql_replica_database_engine_1/test.py::test_multiple_databases test_postgresql_replica_database_engine_1/test.py::test_quoting_1 test_postgresql_replica_database_engine_1/test.py::test_quoting_2 test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index -vvv" altinityinfra/integration-tests-runner:9d492c2eec24 ' returned non-zero exit status 1.
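The closing traceback is the integration-test `runner` script converting the container's exit status into an exception: `subprocess.check_call` raises `CalledProcessError` whenever its command exits non-zero, and pytest exits with 1 when any test fails. A minimal sketch of the same mechanism, with a stand-in for the long `docker run ...` command:

    import subprocess

    try:
        # "exit 1" stands in for the docker run command above; any non-zero
        # exit status makes check_call raise CalledProcessError.
        subprocess.check_call("exit 1", shell=True)
    except subprocess.CalledProcessError as exc:
        # exc.cmd and exc.returncode yield the same message the runner prints.
        print(f"Command {exc.cmd!r} returned non-zero exit status {exc.returncode}.")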